

Thank you for pointing to the interesting question regarding

Neural Information Processing Systems

We thank the reviewers for their valuable feedback, and we address the raised questions and comments below. We will discuss these limitations explicitly in our revised text. In this work we learn only diffuse textures; we will clarify this in the text and define it more precisely.


Thank you for raising the interesting question on the conditions for asymptotic

Neural Information Processing Systems

This is achieved, e.g., if a constant fraction of all samples lies on the point. We will adjust Theorem 3.3 by reformulating lines 190-191 as follows: "Furthermore, consider an infinite data stream of observations". On making Theorem 3.3 quantitative, as suggested by Reviewer #2: although the quantities are unbounded, they grow slowly enough to allow the proof of Theorem 3.3, so that the main result still holds. We will add a brief discussion of this in the updated paper. Reviewer #1 raised a question about Assumption 3.1; Assumption 3.1 is, however, valid for our experimental setup. We will include the given reasoning in the updated paper.


Interview with Kate Candon: Leveraging explicit and implicit feedback in human-robot interactions

AIHub

In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. Kate Candon is a PhD student at Yale University interested in understanding how we can create interactive agents that are more effectively able to help people. We spoke to Kate to find out more about how she is leveraging explicit and implicit feedback in human-robot interactions. Specifically, I'm interested in how we can get robots to better learn from humans in the way that humans naturally teach. Typically, a lot of work in robot learning involves a human teacher who is only tasked with giving explicit feedback to the robot but is not necessarily engaged in the task.


World's most advanced humanoid robot gives chilling response when asked if it's going to take our jobs

Daily Mail - Science & tech

As robots get more and more advanced, it's natural to worry that we'll all soon be replaced by machines in the workplace. But the world's most advanced humanoid robot has hardly allayed our fears. At Mobile World Congress (MWC) in Barcelona this week, MailOnline spoke with Ameca the bot, made by British firm Engineered Arts. MailOnline asked the sophisticated machine: 'Will robots take all our jobs?' Somewhat concerningly, the bot replied: 'I don't know, how good are you at your job?' She continued: 'It depends how good you are at it, I suppose.'


AI evolution raising 'important questions' about ethics, Sony research scientist says

#artificialintelligence

Sony Group Corporation Global Head of AI Ethics & Sony AI Lead Research Scientist Alice Xiang speaks with Yahoo Finance tech reporter Allie Garfinkle at the CES 2023 event in Las Vegas about artificial intelligence, ethical data collection, and how AI factors into Sony games and products. No shortage of conversation, or should I say chat, about artificial intelligence at CES this year. AI is having a moment given the emergence of ChatGPT, and Yahoo Finance's Allie Garfinkle comes to us live from CES with more on that. I am so excited to be here with Alice. Alice, let's start off by just talking about your top issues in AI this year.


Open AI gets GPT-3 to work by hiring an army of humans to fix GPT's bad answers. Interesting questions involving the mix of humans and computer algorithms in Open AI's GPT-3 program

#artificialintelligence

The InstructGPT research did recruit 40 contractors to generate a dataset on which GPT-3 was then fine-tuned. But I [Quach] don't think those contractors are employed on an ongoing basis to edit responses generated by the model. A spokesperson from the company just confirmed to me: "OpenAI does not hire copywriters to edit generated answers," so I don't think the claims are correct. So the above post was misleading. I'd originally titled it, "Open AI gets GPT-3 to work by hiring an army of humans to fix GPT's bad answers." I changed it to "Interesting questions involving the mix of humans and computer algorithms in Open AI's GPT-3 program." I appreciate all the helpful comments! Stochastic algorithms are hard to understand, especially when they include tuning parameters. I'd still like to know whassup with Google's LaMDA chatbot (see item 2 in this post).


Philip Glass on Artificial Intelligence and Art

#artificialintelligence

This conversation between the composer Philip Glass and me discusses an exciting project in partnership with OpenAI, in which we trained a neural net on a corpus of Glass' work. He offers commentary on the music created by "his AI", as well as insights on composition and creating art. We then talk about the different limitations and capacities of humans and Artificial Intelligence–if and how neural nets can help us create art, appreciate art, and find the same things humans find meaningful. Due to the COVID-19 pandemic, this call took place over video conference in December 2020. Art and tech are both captivating to me because they frame the elevation and the limitations of being human. Art is also closely intertwined with technological advancements, as movement-shifting art often seems predicated on tech. For example, the photography of Martin Munkacsi from the 1920s and 1930s revolutionized the art, as he is often credited with being the first photographer to explore dynamic and candid styles. The emergence and ability of these new forms of creation coincided with the technological advancements of the time that enabled flash and faster shutters–candid and spontaneous movement shots wouldn't have been technically possible with the cameras that existed before. The advancements in machine learning today, likewise, excite me for the possibilities and new forms in art and creation. The goal of this project is to explore the capacities of artificial intelligence as a new medium (or instrument or tool?) for art, and to create a collaborative music composition with Philip Glass and "his AI." More details about the project can be found below. Philip: Nice to see you.


2021: A year in AI (so far)

#artificialintelligence

If 2020 was the year of large language models and meta-learning, 2021 so far has been the year of large, multi-modal models that combine vision and text together. OpenAI's CLIP and DALL-E models have shown just how robust the combination of language modeling and vision can be. DALL-E in particular has shown itself to be capable of generating very impressive images based on user-specified text prompts. Presumably, there's much more to come in this area, including integrations with robotics and a continued push toward bringing AI into the physical world. New questions are being raised about when and how AI should be applied, given established problems with bias in AI algorithms.
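The core mechanic behind CLIP-style vision-language models can be illustrated with a toy sketch: images and text prompts are embedded into a shared vector space, and a label is assigned by cosine similarity between the image embedding and each prompt embedding, which is what enables zero-shot classification. The embeddings below are hand-made stand-ins, not the output of any real encoder.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_classify(image_embedding, prompt_embeddings, labels):
    """Pick the label whose text embedding is closest to the image embedding."""
    scores = [cosine_similarity(image_embedding, p) for p in prompt_embeddings]
    return labels[int(np.argmax(scores))], scores

# Toy, hand-made embeddings standing in for a real encoder's output.
image_emb = np.array([0.9, 0.1, 0.2])
prompts = [np.array([0.88, 0.05, 0.25]),   # e.g. "a photo of a dog"
           np.array([0.10, 0.95, 0.10])]   # e.g. "a photo of a car"

label, scores = zero_shot_classify(image_emb, prompts, ["dog", "car"])
print(label)  # -> dog
```

In the real model the two encoders are trained contrastively so that matching image-text pairs land close together; the classification step itself is just this similarity comparison.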


McDonald's Replaces Drive-Thru Human Workers With Siri-Like AI - AI Summary

#artificialintelligence

The fast food giant has been testing out a Siri-like voice-recognition system at ten drive-thru locations in Chicago, CEO Chris Kempczinski revealed during a Wednesday investor conference attended by Nation's Restaurant News. The system can handle about 80 percent of the orders that come its way and fills them with about 85 percent accuracy -- probably annoying for the customers who just want to drive off with their burger -- but Kempczinski says a national rollout could happen in as little as five years. It raises some interesting questions about the role that AI technology will play in various industries and, more importantly, the seemingly endless debate over whether raising the minimum wage to a livable salary will motivate CEOs to replace humans with machines -- or whether they'd do so to cut costs anyway. Part of the challenge in automating the drive-thru, Kempczinski said, is that human workers have been too eager to help out while supervising the technology that might one day replace them, preventing it from accruing the real-world data crucial for further improving the system. But as restaurant automation grows increasingly common, answering the question of how much responsibility a company has to continue employing people it could technically replace with machines will only grow more important and dire.
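Taken at face value, the two reported figures compose into a lower end-to-end success rate. A quick back-of-the-envelope calculation, assuming the 80 percent handling rate and 85 percent accuracy are independent (an assumption the article does not state):

```python
# Back-of-the-envelope: fraction of all drive-thru orders the system both
# takes and fills correctly, assuming the two reported rates compose
# independently (an assumption, not a figure from the article).
handled_fraction = 0.80   # orders the system can take at all
accuracy = 0.85           # of those, fraction filled correctly

end_to_end = handled_fraction * accuracy
print(f"{end_to_end:.0%}")  # -> 68%
```

So, under that independence assumption, roughly two in three orders go through the system cleanly with no human intervention, which helps explain why workers still hover over it.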


Protecting Computers and People From Viruses

Communications of the ACM

The COVID-19 pandemic highlights the analogy that gave rise to borrowing the word "virus" from biology to label a malicious program that attacks computer systems. The situation moves us to look into that, as another way to compare nature and artifact, and as an excuse to raise more abstract questions. We are moved also to stipulate that our mastery of both the biological and computational forms is shallow, and to invite other, better observations to follow. See Apvrille and Guillaume [1] for greater depth and intriguing crossover speculation, Weis [11] for yet more intriguing comparison, and Wenliang Du's website for detailed virus examples [3], which constitute dramatic reading for coders. A virus is generally not regarded as a living organism, but is sometimes described as (similar to) software.